
Conversation

@JyotinderSingh
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @JyotinderSingh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the quantization capabilities within Keras by introducing a flexible QuantizationConfig API. This new configuration object allows users to specify custom quantizers for both weights and activations, moving beyond predefined quantization settings. The change integrates this new API into key layers like Dense, EinsumDense, Embedding, and ReversibleEmbedding, and refactors GPTQConfig to build upon this new base. This provides a more extensible and user-friendly approach to model quantization, allowing for more tailored optimization strategies.

Highlights

  • New Quantization API: Introduced a new QuantizationConfig API that allows for customizable weight and activation quantizers, providing greater flexibility in defining quantization schemes.
  • Layer Integration: Core layers such as Dense, EinsumDense, Embedding, and ReversibleEmbedding have been updated to accept and utilize the new QuantizationConfig object, enabling fine-grained control over their quantization behavior.
  • GPTQConfig Inheritance: The GPTQConfig now inherits from QuantizationConfig, streamlining the quantization configuration hierarchy and ensuring consistency across different quantization methods.
  • Model Quantization Method Update: The quantize method in the Model class has been enhanced to accept the config argument for all quantization modes, removing the previous restriction that it was only supported for GPTQ mode.
  • Weight-Only Quantization Support: Added explicit handling for weight-only quantization scenarios where inputs_quantizer might be None, ensuring correct scaling in such cases.
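The weight-only case called out in the last highlight can be illustrated with a small standalone sketch. Everything below is hypothetical (`abs_max_quantize` and `dense_matvec` are illustration-only names, not the Keras internals this PR touches): when no inputs quantizer is configured, only the weights are quantized, and the product is computed in float after rescaling the weights.

```python
# Illustrative sketch only: a simplified symmetric int8 quantizer showing
# how weight-only quantization can be handled when the inputs quantizer
# is None. These names are hypothetical, not the Keras implementation.

def abs_max_quantize(values, num_bits=8):
    """Symmetrically quantize a list of floats to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    abs_max = max(abs(v) for v in values) or 1.0
    scale = abs_max / qmax
    return [round(v / scale) for v in values], scale


def dense_matvec(weights, inputs, inputs_quantizer=None):
    """Quantized dot product; falls back to weight-only mode when
    `inputs_quantizer` is None (inputs stay in float)."""
    q_w, w_scale = abs_max_quantize(weights)
    if inputs_quantizer is None:
        # Weight-only path: rescale quantized weights, multiply in float.
        return sum(q * w_scale * x for q, x in zip(q_w, inputs))
    q_x, x_scale = inputs_quantizer(inputs)
    # Full int8 path: integer accumulate, then apply both scales once.
    acc = sum(q * p for q, p in zip(q_w, q_x))
    return acc * w_scale * x_scale
```

Both paths approximate the float result `0.5 * 2.0 - 1.0 * 3.0 = -2.0` to within quantization error; the weight-only path simply skips input quantization rather than requiring a dummy inputs quantizer.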

@JyotinderSingh marked this pull request as draft on December 4, 2025 17:56
@JyotinderSingh changed the title from "Introduces customizable quantization API using QuantizationConfig" to "Introduces QuantizationConfig for fine-grained quantization control" on December 4, 2025
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a QuantizationConfig to provide a more flexible and customizable quantization API. This is a significant improvement, allowing users to specify their own quantizers for weights and activations, and enabling features like weight-only quantization. The changes are well-implemented across various layers including Dense, EinsumDense, Embedding, and ReversibleEmbedding, as well as the model-level quantize method. The new QuantizationConfig class is well-designed with serialization support, and the accompanying tests are comprehensive. I have a couple of suggestions for minor code improvements to reduce redundancy and enhance clarity.

@codecov-commenter

codecov-commenter commented Dec 4, 2025

Codecov Report

❌ Patch coverage is 92.79279% with 16 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.33%. Comparing base (f0a48a6) to head (16e54ba).

Files with missing lines Patch % Lines
keras/src/quantizers/quantization_config.py 90.21% 4 Missing and 5 partials ⚠️
keras/api/_tf_keras/keras/quantizers/__init__.py 0.00% 4 Missing ⚠️
keras/src/layers/core/reversible_embedding.py 92.30% 1 Missing and 1 partial ⚠️
keras/src/layers/core/dense.py 96.15% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21896      +/-   ##
==========================================
+ Coverage   76.30%   76.33%   +0.03%     
==========================================
  Files         580      581       +1     
  Lines       60029    60184     +155     
  Branches     9432     9460      +28     
==========================================
+ Hits        45803    45942     +139     
- Misses      11750    11759       +9     
- Partials     2476     2483       +7     
Flag Coverage Δ
keras 76.20% <91.89%> (+0.03%) ⬆️
keras-jax 62.18% <89.18%> (+0.05%) ⬆️
keras-numpy 57.39% <88.28%> (+0.07%) ⬆️
keras-openvino 34.30% <31.08%> (+<0.01%) ⬆️
keras-torch 63.27% <88.28%> (+0.05%) ⬆️

Flags with carried forward coverage won't be shown.


@JyotinderSingh force-pushed the quantization-customization branch 3 times, most recently from 2ae1e37 to a3668d5 on December 5, 2025 at 08:22
@JyotinderSingh
Collaborator Author

/gemini review

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces QuantizationConfig to provide a more structured and fine-grained control over quantization settings. The changes are well-implemented across various layers and the new configuration class is well-designed. I've found a couple of minor issues related to an unused parameter and an outdated docstring that should be addressed.

Comment on lines 433 to 434
mode: The mode of the quantization. Only 'int8' is supported at this
time.

Severity: medium

The docstring for the mode argument is outdated. It should be updated to reflect all supported quantization modes, such as 'int8', 'int4', 'float8', and 'gptq'. Currently, it only mentions 'int8'. It should also clarify that mode is optional if config is provided.

Suggested change:
- mode: The mode of the quantization. Only 'int8' is supported at this
-     time.
+ mode: The mode for quantization, e.g., `'int8'`, `'int4'`, `'float8'`,
+     or `'gptq'`. This is optional if a `config` object is provided.

return "float8"


def validate_and_resolve_config(mode, config, name=None):

Severity: medium

The name parameter in validate_and_resolve_config is unused and can be removed to improve code clarity.

Suggested change:
- def validate_and_resolve_config(mode, config, name=None):
+ def validate_and_resolve_config(mode, config):
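As a rough sketch of the mode/config resolution being discussed (hypothetical names and logic, not the actual Keras implementation in this PR): an explicit config takes precedence, a bare mode string gets wrapped into a default config, and conflicting or unknown values raise.

```python
# Hypothetical sketch of a mode/config resolver; not the Keras code.

SUPPORTED_MODES = ("int8", "int4", "float8", "gptq")


class QuantizationConfig:
    """Stand-in for the config object introduced in this PR."""

    def __init__(self, mode):
        self.mode = mode


def validate_and_resolve_config(mode, config):
    """Resolve a (mode, config) pair into a single QuantizationConfig.

    An explicit `config` wins; a bare `mode` string is wrapped into a
    default config; conflicting or unsupported values raise ValueError.
    """
    if config is not None:
        if mode is not None and mode != config.mode:
            raise ValueError(
                f"Received conflicting `mode={mode!r}` and "
                f"`config.mode={config.mode!r}`."
            )
        return config
    if mode is None:
        raise ValueError("Provide either `mode` or `config`.")
    if mode not in SUPPORTED_MODES:
        raise ValueError(f"Unsupported quantization mode: {mode!r}")
    return QuantizationConfig(mode)
```

Under this shape, `name` is indeed never needed for resolution, which is consistent with the review's suggestion to drop the parameter.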

@JyotinderSingh force-pushed the quantization-customization branch from 3a31239 to 6917701 on December 8, 2025 at 04:54